Multi-agent systems (2024-03-15)
Andreas Kalaitzakis, Jérôme Euzenat, À quoi sert la spécialisation en évolution culturelle de la connaissance?, in: Maxime Morge (éd), Actes 31e journées francophones sur les systèmes multi-agents (JFSMA), Strasbourg (FR), pp76-85, 2023
Agents can evolve their ontologies by jointly accomplishing a task. We consider a set of tasks, of which each agent considers only a part. We hypothesize that the fewer tasks an agent considers, the higher the accuracy of its best task will be. To test this, we simulate different populations considering an increasing number of tasks. Counter-intuitively, the hypothesis is not verified. On the one hand, when agents have unlimited memory, the more tasks an agent considers, the more accurate it is. On the other hand, when agents have limited memory, the objectives of maximizing the accuracy of their best task and of agreeing with one another are mutually exclusive. When societies favour specialization, agents do not improve their accuracy. However, these agents will more often decide according to their best tasks, thereby improving the performance of their society.
Cultural knowledge evolution, Multi-agent simulation, Agent specialization
Andreas Kalaitzakis, Jérôme Euzenat, Multi-tasking resource-constrained agents reach higher accuracy when tasks overlap, in: Proc. 20th European conference on multi-agent systems (EUMAS), Napoli (IT), (Vadim Malvone, Aniello Murano (eds), Proc. 20th European conference on multi-agent systems (EUMAS), Lecture notes in computer science 14282, 2023), pp425-434, 2023
Agents have previously been shown to evolve their ontologies while interacting over a single task. However, little is known about how interacting over several tasks affects the accuracy of agent ontologies. Is knowledge learned by tackling one task beneficial for another task? We hypothesize that multi-tasking agents tackling tasks that rely on the same properties are more accurate than multi-tasking agents tackling tasks that rely on different properties. We test this hypothesis by varying two parameters: the number of tasks assigned to the agents, and the number of properties these tasks have in common. Results show that when decisions for different tasks rely on the same properties, multi-tasking agents reach higher accuracy. This suggests that when agents tackle several tasks, it is possible to transfer knowledge from one task to another.
Cultural knowledge evolution, Knowledge transfer, Multi-tasking
Line van den Berg, Manuel Atencia, Jérôme Euzenat, Raising awareness without disclosing truth, Annals of mathematics and artificial intelligence 91(4):431-464, 2023
Agents use their own vocabularies to reason and talk about the world. Public signature awareness is satisfied if agents are aware of the vocabularies, or signatures, used by all agents they may eventually interact with. Multi-agent modal logics, and in particular Dynamic Epistemic Logic, rely on public signature awareness for modeling information flow in multi-agent systems. However, this assumption is not desirable for dynamic and open multi-agent systems because (1) it prevents agents from using unique signatures other agents are unaware of, (2) it prevents agents from openly extending their signatures when encountering new information, and (3) it requires that all future knowledge and beliefs of agents be bounded by the current state. We propose a new semantics for awareness that enables us to drop public signature awareness. This semantics is based on partial valuation functions and weakly reflexive relations. Dynamics for raising public and private awareness are then defined in such a way as to differentiate between becoming aware of a proposition and learning its truth value. With this, we show that knowledge and beliefs are not affected by the raising operations.
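The distinction between becoming aware of a proposition and learning its truth value can be illustrated with a small toy model. The encoding below is a hypothetical Python sketch, not the paper's formal semantics: epistemic worlds carry partial valuations, and raising awareness of a proposition splits each world instead of fixing the proposition's value.

```python
# Toy model of partial valuations: each epistemic world assigns truth
# values only to the propositions the agent is aware of.
worlds = [{"p": True}, {"p": True}]  # the agent is unaware of "q"

def aware(worlds, prop):
    # awareness: the proposition is valued in every considered world
    return all(prop in w for w in worlds)

def knows(worlds, prop):
    # knowledge: the proposition is valued and true in every world
    return aware(worlds, prop) and all(w[prop] for w in worlds)

def raise_awareness(worlds, prop):
    # becoming aware without learning: each world is split into one
    # variant where prop holds and one where it does not
    return [dict(w, **{prop: v}) for w in worlds for v in (True, False)]

raised = raise_awareness(worlds, "q")
```

After raising, the agent is aware of q but does not know it, while its knowledge of p is untouched, in line with the claim that the raising operations do not affect knowledge and beliefs.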
Awareness, Raising awareness, Dynamic epistemic logic, Partial valuations, Multi-agent systems
Yasser Bourahla, Manuel Atencia, Jérôme Euzenat, Knowledge transmission and improvement across generations do not need strong selection, in: Piotr Faliszewski, Viviana Mascardi, Catherine Pelachaud, Matthew Taylor (eds), Proc. 21st ACM international conference on Autonomous Agents and Multi-Agent Systems (AAMAS), (Online), pp163-171, 2022
Agents have been used for simulating cultural evolution and cultural evolution can be used as a model for artificial agents. Previous results have shown that horizontal, or intra-generation, knowledge transmission allows agents to improve the quality of their knowledge to a certain level. Moreover, variation generated through vertical, or inter-generation, transmission allows agents to exceed that level. Such results were obtained under specific conditions such as the drastic selection of agents allowed to transmit their knowledge, seeding the process with correct knowledge or introducing artificial noise during transmission. Here, we question the necessity of such measures and study their impact on the quality of transmitted knowledge. For that purpose, we combine the settings of two previous experiments and relax these conditions (no strong selection of teachers, no fully correct seed, no introduction of artificial noise). The rationale is that if interactions lead agents to improve their overall knowledge quality, this should be sufficient to ensure correct knowledge transmission, and that transmission mechanisms are sufficiently imperfect to produce variation. In this setting, we confirm that vertical transmission improves on horizontal transmission even without drastic selection and oriented learning. We also show that horizontal transmission is able to compensate for the lack of parent selection if it is maintained for long enough. This means that it is not necessary to take the most successful agents as teachers, in either vertical or horizontal transmission, to cumulatively improve knowledge.
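The interplay of horizontal and vertical transmission described above can be sketched with a minimal simulation. The rules below are invented for illustration (boolean "knowledge" instead of ontologies, direct environment feedback instead of the paper's interaction protocol): agents improve through random pairwise interactions, and children copy unselected parents with imperfect transmission.

```python
import random

random.seed(0)
TARGET = [1] * 20   # hypothetical "correct" environment facts
POP = 10

def new_agent(p_correct=0.5):
    # each belief is initially correct with probability p_correct
    return [b if random.random() < p_correct else 1 - b for b in TARGET]

def accuracy(agent):
    return sum(a == t for a, t in zip(agent, TARGET)) / len(TARGET)

def horizontal_step(pop):
    # two random agents confront one belief with the environment;
    # whoever is wrong adopts the environment's answer (success feedback)
    a, b = random.sample(pop, 2)
    i = random.randrange(len(TARGET))
    for ag in (a, b):
        if ag[i] != TARGET[i]:
            ag[i] = TARGET[i]

def vertical_step(pop, noise=0.05):
    # children copy a random parent (no selection of the best agents),
    # with imperfect transmission producing variation
    return [[bit if random.random() > noise else 1 - bit
             for bit in random.choice(pop)]
            for _ in range(POP)]

pop = [new_agent() for _ in range(POP)]
start = sum(map(accuracy, pop)) / POP
for _ in range(5):                    # five generations
    for _ in range(500):              # intra-generation interactions
        horizontal_step(pop)
    pop = vertical_step(pop)          # inter-generation transmission
for _ in range(500):
    horizontal_step(pop)
end = sum(map(accuracy, pop)) / POP
```

Even without teacher selection, sustained horizontal interaction corrects the variation introduced by vertical transmission, so average accuracy rises across generations.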
Ontology, Multi-agent social simulation, Multi-agent learning, Knowledge diversity
Yasser Bourahla, Manuel Atencia, Jérôme Euzenat, Transmission de connaissances et sélection, in: Valérie Camps (éd), Actes 30e journées francophones sur les systèmes multi-agents (JFSMA), Saint-Étienne (FR), pp63-72, 2022
Agents can be used to simulate cultural evolution, and cultural evolution can be used as a model for artificial agents. Experiments have shown that intra-generation knowledge transmission allows agents to improve the quality of their knowledge. Moreover, inter-generation transmission allows them to exceed that level. These results were obtained under specific conditions: drastic selection of the agents transmitting their knowledge, initialization with correct knowledge, or the introduction of noise during transmission. In order to study the impact of these measures on the quality of the transmitted knowledge, we combine the settings of two previous experiments and relax these conditions. This setting confirms that vertical transmission improves on the quality of the knowledge obtained through horizontal transmission, even without drastic selection and oriented learning. It also shows that sufficient intra-generation transmission can compensate for the absence of parent selection.
Multi-agent social simulation, Cultural evolution, Knowledge transmission, Agent generations, Cultural knowledge evolution
Yasser Bourahla, Manuel Atencia, Jérôme Euzenat, Inter-generation knowledge transmission without individual selection, in: Proc. 4th conference of the Cultural evolution society, Aarhus (DK), 2022
Cultural knowledge evolution, Vertical transmission, Horizontal transmission, Multi-agent simulation, Knowledge accuracy
Jérôme Euzenat, Can AI systems culturally evolve their knowledge?, in: Proc. 4th conference of the Cultural evolution society, Aarhus (DK), 2022
Agent-based models, Computational cultural knowledge evolution, Artificial intelligence
Yasser Bourahla, Manuel Atencia, Jérôme Euzenat, Knowledge improvement and diversity under interaction-driven adaptation of learned ontologies, in: Ulle Endriss, Ann Nowé, Frank Dignum, Alessio Lomuscio (eds), Proc. 20th ACM international conference on Autonomous Agents and Multi-Agent Systems (AAMAS), London (UK), pp242-250, 2021
When agents independently learn knowledge, such as ontologies, about their environment, it may be diverse, incorrect or incomplete. This knowledge heterogeneity could lead agents to disagree, thus hindering their cooperation. Existing approaches usually deal with this interaction problem by relating ontologies, without modifying them, or, on the contrary, by focusing on building common knowledge. Here, we consider agents adapting ontologies learned from the environment in order to agree with each other when cooperating. In this scenario, fundamental questions arise: Do they achieve successful interaction? Can this process improve knowledge correctness? Do all agents end up with the same ontology? To answer these questions, we design a two-stage experiment. First, agents learn to take decisions about the environment by classifying objects and the learned classifiers are turned into ontologies. In the second stage, agents interact with each other to agree on the decisions to take and modify their ontologies accordingly. We show that agents indeed reduce interaction failure, most of the time they improve the accuracy of their knowledge about the environment, and they do not necessarily opt for the same ontology.
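The two-stage design above can be caricatured in a few lines. The sketch below is a hypothetical stand-in, not the paper's setup: "ontologies" are reduced to one-dimensional threshold classifiers learned from noisy observations, and the adaptation rule (nudging thresholds toward each other on disagreement) is invented for illustration.

```python
import random

random.seed(1)
TRUE_BOUNDARY = 0.5   # hypothetical environment: class(x) = x > 0.5
N_AGENTS = 8

def learn_threshold(n_samples=20):
    # stage 1: each agent independently learns a threshold classifier
    # from its own noisy observations of the class boundary
    estimates = [TRUE_BOUNDARY + random.gauss(0, 0.15) for _ in range(n_samples)]
    return sum(estimates) / n_samples

agents = [learn_threshold() for _ in range(N_AGENTS)]

def disagreement_rate(agents, trials=2000):
    # proportion of interactions in which two agents classify
    # a random object differently
    fails = 0
    for _ in range(trials):
        x = random.random()
        a, b = random.sample(agents, 2)
        fails += (x > a) != (x > b)
    return fails / trials

before = disagreement_rate(agents)

# stage 2: agents interact; on disagreement about an object, each
# nudges its threshold toward the other's (an invented adaptation
# rule standing in for the paper's ontology modifications)
for _ in range(20000):
    x = random.random()
    i, j = random.sample(range(N_AGENTS), 2)
    if (x > agents[i]) != (x > agents[j]):
        agents[i] += 0.1 * (agents[j] - agents[i])
        agents[j] += 0.1 * (agents[i] - agents[j])

after = disagreement_rate(agents)
```

Repeated adaptation drives the disagreement (interaction failure) rate down, mirroring the first finding of the paper, even though the agents' classifiers need not become strictly identical.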
Ontology, Multi-agent social simulation, Multi-agent learning, Knowledge diversity
Line van den Berg, Manuel Atencia, Jérôme Euzenat, A logical model for the ontology alignment repair game, Autonomous agents and multi-agent systems 35(2):32, 2021
Ontology alignments enable agents to communicate while preserving heterogeneity in their knowledge. Alignments may not be provided as input and should be able to evolve when communication fails or when new information contradicting the alignment is acquired. The Alignment Repair Game (ARG) has been proposed for agents to simultaneously communicate and repair their alignments through adaptation operators when communication failures occur. ARG has been evaluated experimentally and the experiments showed that agents converge towards successful communication and improve their alignments. However, whether the adaptation operators are formally correct, complete or redundant could not be established by experiments. We introduce a logical model, Dynamic Epistemic Ontology Logic (DEOL), that enables us to answer these questions. This framework allows us (1) to express the ontologies and alignments used via a faithful translation from ARG to DEOL, (2) to model the ARG adaptation operators as dynamic modalities and (3) to formally define and establish the correctness, partial redundancy and incompleteness of the adaptation operators in ARG.
The refine operator is not partially redundant with respect to Agent b (because it has no way to detect the incoherence from the announcement alone).
Ontology alignment, Alignment repair, Multi-agent systems, Agent communication, Dynamic Epistemic Logic
Jérôme Euzenat, A map without a legend: the semantic web and knowledge evolution, Semantic web journal 11(1):63-68, 2020
The current state of the semantic web is focused on data. This is worthwhile progress in web content processing and interoperability. However, it only marginally contributes to knowledge improvement and evolution. Understanding the world, and interpreting data, requires knowledge. Not knowledge cast in stone forever, but knowledge that can seamlessly evolve; not knowledge from one single authority, but diverse knowledge sources which stimulate confrontation and robustness; not consistent knowledge at web scale, but local theories that can be combined. We discuss two different ways in which semantic web technologies can greatly contribute to the advancement of knowledge: semantic eScience and cultural knowledge evolution.
Semantic web, Linked data, Big data, Open data, Knowledge representation, Knowledge, Ontology, Machine learning, Reproducible research, eScience, Cultural evolution
Line van den Berg, Manuel Atencia, Jérôme Euzenat, Agent ontology alignment repair through dynamic epistemic logic, in: Bo An, Neil Yorke-Smith, Amal El Fallah Seghrouchni, Gita Sukthankar (eds), Proc. 19th ACM international conference on Autonomous Agents and Multi-Agent Systems (AAMAS), Auckland (NZ), pp1422-1430, 2020
Ontology alignments enable agents to communicate while preserving heterogeneity in their information. Alignments may not be provided as input and should be able to evolve when communication fails or when new information contradicting the alignment is acquired. In the Alignment Repair Game (ARG) this evolution is achieved via adaptation operators. ARG was evaluated experimentally and the experiments showed that agents converge towards successful communication and improve their alignments. However, whether the adaptation operators are formally correct, complete or redundant is still an open question. In this paper, we introduce a formal framework based on Dynamic Epistemic Logic that allows us to answer this question. This framework allows us (1) to express the ontologies and alignments used, (2) to model the ARG adaptation operators through announcements and conservative upgrades and (3) to formally establish the correctness, partial redundancy and incompleteness of the adaptation operators in ARG.
The refine operator is not partially redundant with respect to Agent b (because it has no way to detect the incoherence from the announcement alone).
Ontology alignment, Alignment repair, Agent communication, Dynamic Epistemic Logic
Line van den Berg, Manuel Atencia, Jérôme Euzenat, Unawareness in multi-agent systems with partial valuations, in: Proc. 10th AAMAS workshop on Logical Aspects of Multi-Agent Systems (LAMAS), Auckland (NZ), 2020
Public signature awareness is satisfied if agents are aware of the vocabulary, i.e., the propositions, used by other agents to think and talk about the world. However, assuming that agents are fully aware of each other's signatures prevents them from adapting their vocabularies to newly gained information, whether obtained from the environment or learned through agent communication. This assumption is therefore not realistic for open multi-agent systems. We propose a novel way to model awareness with partial valuations that drops public signature awareness and can model agents' unawareness of each other's signatures, and we give a first view on defining the dynamics of raising and forgetting awareness within this framework.
Awareness, Dynamic Epistemic Logic, Partial valuations, Multi-agent systems
Jérôme Euzenat, De la langue à la connaissance: approche expérimentale de l'évolution culturelle, Bulletin de l'AFIA 100:9-12, 2018
Jérôme Euzenat, Interaction-based ontology alignment repair with expansion and relaxation, in: Proc. 26th International Joint Conference on Artificial Intelligence (IJCAI), Melbourne (VIC AU), pp185-191, 2017
Agents may use ontology alignments to communicate when they represent knowledge with different ontologies: alignments help reclassifying objects from one ontology to the other. These alignments may not be perfectly correct, yet agents have to proceed. They can take advantage of their experience in order to evolve alignments: upon communication failure, they will adapt the alignments to avoid repeating the same mistake. Such repair experiments had been performed in the framework of networks of ontologies related by alignments. They revealed that, by playing simple interaction games, agents can effectively repair random networks of ontologies. Here we repeat these experiments and, using new measures, show that previous results were underestimated. We introduce new adaptation operators that improve on those previously considered. We also allow agents to go beyond the initial operators in two ways: they can generate new correspondences when they discard incorrect ones, and they can provide less precise answers. The combination of these modalities satisfies the following properties: (1) Agents still converge to a state in which no mistake occurs. (2) They achieve results far closer to the correct alignments than previously found. (3) They again reach 100% precision and coherent alignments.
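The flavour of these interaction games can be conveyed with a stripped-down model. Everything below is a hypothetical toy encoding (integer-range "ontologies", subsumption correspondences, and only the simplest repair operator, deletion), not the operators evaluated in the paper.

```python
import random

random.seed(2)

# two toy agent ontologies: each classifies the integers 0..99
ont_a = {"a_low": lambda x: x < 50, "a_high": lambda x: x >= 50}
ont_b = {"b_low": lambda x: x < 30, "b_high": lambda x: x >= 30}

# candidate alignment of subsumption correspondences (c1 subsumed by c2)
alignment = [("b_low", "a_low"),    # correct: x < 30 implies x < 50
             ("a_high", "b_high"),  # correct: x >= 50 implies x >= 30
             ("a_low", "b_low"),    # incorrect
             ("b_high", "a_high")]  # incorrect

def holds(corr, x):
    c1, c2 = corr
    src = ont_a.get(c1) or ont_b[c1]
    tgt = ont_a.get(c2) or ont_b[c2]
    return (not src(x)) or tgt(x)   # subsumption: c1(x) implies c2(x)

# interaction game: draw a random object, test one correspondence;
# on a communication failure, apply the deletion repair operator
for _ in range(2000):
    x = random.randrange(100)
    corr = random.choice(alignment)
    if not holds(corr, x):
        alignment.remove(corr)

precision = (sum(all(holds(c, x) for x in range(100)) for c in alignment)
             / len(alignment))
```

Incorrect correspondences eventually fail on some object and are discarded, while correct ones are never deleted, so the surviving alignment reaches 100% precision, the convergence property the experiments exhibit.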
The results reported in this paper for operators addjoin and refadd are not accurate, due to a software error. The results reported were worse than they should have been. Updated results can be found in [20180308-NOOR], [20180311-NOOR] and [20180529-NOOR].
Jérôme Euzenat, Crafting ontology alignments from scratch through agent communication, in: Proc. 20th International Conference on Principles and practice of multi-agent systems (PRIMA), Nice (FR), (Bo An, Ana Bazzan, João Leite, Serena Villata, Leendert van der Torre (eds), Proc. 20th International Conference on Principles and practice of multi-agent systems (PRIMA), Lecture notes in computer science 10621, 2017), pp245-262, 2017
Agents may use different ontologies for representing knowledge and take advantage of alignments between ontologies in order to communicate. Such alignments may be provided by dedicated algorithms, but their accuracy is far from satisfactory. We have already explored operators allowing agents to repair such alignments while using them for communicating. Whether agents could craft alignments from scratch in the same way remained an open question. Here we explore the use of expanding repair operators for that purpose. When starting from empty alignments, agents fail to create them, as they have nothing to repair. Hence, we introduce the capability for agents to risk adding new correspondences when no existing one is useful. We compare and discuss the results provided by this modality and show that, thanks to this generative capability, agents reach more accurate alignments than without it. When starting with empty alignments, the alignments reach the same quality level as when starting with random alignments, thus providing a reliable way for agents to build alignments from scratch through communication.
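The generative modality can be added to the same toy repair setting. As before, this is a hypothetical sketch (invented classes, random candidate generation, deletion as the only repair), not the paper's expanding operators: agents start from an empty alignment, risk adding random candidate correspondences, and delete those that cause failures.

```python
import random

random.seed(4)

# toy ontologies: named classes over the integers 0..99
classes = {"a_low": lambda x: x < 50, "a_high": lambda x: x >= 50,
           "b_low": lambda x: x < 30, "b_high": lambda x: x >= 30}
a_cls, b_cls = ["a_low", "a_high"], ["b_low", "b_high"]
candidates = ([(c1, c2) for c1 in a_cls for c2 in b_cls]
              + [(c1, c2) for c1 in b_cls for c2 in a_cls])

def holds(corr, x):
    c1, c2 = corr
    return (not classes[c1](x)) or classes[c2](x)  # c1(x) implies c2(x)

alignment = []  # agents start from scratch: nothing to repair yet

# generation phase: agents risk adding a random candidate correspondence,
# and delete any tested correspondence that causes a failure
for _ in range(4000):
    x = random.randrange(100)
    if not alignment or random.random() < 0.3:
        missing = [c for c in candidates if c not in alignment]
        if missing:
            alignment.append(random.choice(missing))
    corr = random.choice(alignment)
    if not holds(corr, x):
        alignment.remove(corr)

# repair-only phase: no further additions, so only reliable
# correspondences survive
for _ in range(4000):
    if not alignment:
        break
    x = random.randrange(100)
    corr = random.choice(alignment)
    if not holds(corr, x):
        alignment.remove(corr)

precision = (sum(all(holds(c, x) for x in range(100)) for c in alignment)
             / len(alignment))
```

Starting from nothing, random generation plus failure-driven deletion leaves exactly the correct subsumptions, illustrating how communication alone can craft an accurate alignment.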
Ontology alignment, Alignment repair, Cultural knowledge evolution, Agent simulation, Coherence, Network of ontologies
Jérôme Euzenat, Knowledge diversity under socio-environmental pressure, in: Michael Rovatsos (ed), Investigating diversity in AI: the ESSENCE project, 2013-2017, Deliverable, ESSENCE, 62p., 2017, pp28-30
Experimental cultural evolution has been convincingly applied to the evolution of natural language and we aim at applying it to knowledge. Indeed, knowledge can be thought of as a shared artefact among a population influenced through communication with others. It can be seen as resulting from contradictory forces: internal consistency, i.e., pressure exerted by logical constraints, against environmental and social pressure, i.e., the pressure exerted by the world and the society agents live in. However, adapting to environmental and social pressure may lead agents to adopt the same knowledge. From an ecological perspective, this is not particularly appealing: species can resist changes in their environment because of the diversity of the solutions that they can offer. This problem may be approached by involving diversity as an internal constraint resisting external pressure towards uniformity.